
    Secant acceleration of sequential residual methods for solving large-scale nonlinear systems of equations

    Sequential Residual Methods try to solve nonlinear systems of equations F(x) = 0 by iteratively updating the current approximate solution along a residual-related direction. Memory requirements are therefore minimal and, consequently, these methods are attractive for solving large-scale nonlinear systems. However, the convergence of these algorithms may be slow in critical cases, so acceleration procedures are welcome. In this paper, we suggest employing a variation of the Sequential Secant Method to accelerate Sequential Residual Methods. The performance of the resulting algorithm is illustrated by applying it to the solution of very large problems coming from the discretization of partial differential equations.
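
    A minimal sketch of the kind of sequential residual iteration being accelerated, using a Barzilai-Borwein-type spectral step length (in the spirit of DF-SANE). The test problem, safeguards, and tolerances are illustrative assumptions; the secant acceleration proposed in the paper is not reproduced here.

```python
# Sketch of a spectral residual iteration for F(x) = 0: the step is taken
# along -F(x), scaled by a spectral (Barzilai-Borwein-type) step length.
import numpy as np

def spectral_residual(F, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    sigma = 1.0                          # initial spectral step length
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:
            break
        x_new = x - sigma * Fx           # residual-related direction
        Fx_new = F(x_new)
        s, y = x_new - x, Fx_new - Fx
        denom = np.dot(s, y)
        # spectral step update with a simple safeguard on its magnitude
        sigma = np.dot(s, s) / denom if abs(denom) > 1e-16 else 1.0
        sigma = min(max(abs(sigma), 1e-10), 1e10)
        x, Fx = x_new, Fx_new
    return x

# toy usage: solve x - cos(x) = 0 componentwise
sol = spectral_residual(lambda x: x - np.cos(x), np.zeros(3))
```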

    Economic inexact restoration for derivative-free expensive function minimization and applications

    The Inexact Restoration approach has proved to be an adequate tool for handling the problem of minimizing an expensive function within an arbitrary feasible set by using different degrees of precision in the objective function. The Inexact Restoration framework allows one to obtain suitable convergence and complexity results for an approach that rationally combines low- and high-precision evaluations. In the present research, it is recognized that many problems with expensive objective functions are nonsmooth and, sometimes, even discontinuous. With this in mind, the Inexact Restoration approach is extended to the nonsmooth or discontinuous case. Although optimization phases that rely on smoothness cannot be used in this case, basic convergence and complexity results are recovered. A derivative-free optimization phase is defined, and the subproblems that arise at this phase are solved using a regularization approach that takes advantage of different notions of stationarity. The new methodology is applied to the problem of reproducing a controlled experiment that mimics the failure of a dam.
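
    A hedged toy sketch of the general idea of rationally combining low- and high-precision evaluations of an expensive, possibly nonsmooth objective: search cheaply first and only refine the precision once the observed decrease can no longer be distinguished from evaluation error. The precision model f(x, p), the random candidate generator, and all tolerances are illustrative assumptions, not the Inexact Restoration algorithm itself.

```python
# Toy derivative-free loop over increasing precision levels; f(x, p) is
# assumed to return the objective with evaluation error at most p.
import numpy as np

def minimize_variable_precision(f, x0, precisions=(1e-1, 1e-3, 1e-6),
                                n_candidates=50, radius=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for p in precisions:                    # low -> high precision
        fx = f(x, p)
        for _ in range(n_candidates):
            trial = x + radius * rng.standard_normal(x.size)
            ft = f(trial, p)
            if ft < fx - 2.0 * p:           # decrease beyond evaluation error
                x, fx = trial, ft
        radius *= 0.5                       # shrink the search as precision grows
    return x
```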

    A MILP model for an extended version of the Flexible Job Shop Problem

    A MILP model for an extended version of the Flexible Job Shop Scheduling problem is proposed. The extension allows the precedences between operations of a job to be given by an arbitrary directed acyclic graph rather than a linear order. The goal is the minimization of the makespan. Theoretical and practical advantages of the proposed model are discussed. Numerical experiments show the performance of a commercial exact solver when applied to the proposed model. The new model is also compared with a simple extension of the model described by Özgüven, Özbakir, and Yavuz (Mathematical models for job-shop scheduling problems with routing and process plan flexibility, Applied Mathematical Modelling, 34:1539--1548, 2010), using instances from the literature and instances inspired by real data from the printing industry.
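
    A minimal sketch of a standard disjunctive big-M MILP for a flexible job shop in which precedences form a DAG, written with PuLP as an illustrative modeling layer. The toy instance, variable names, and big-M value are assumptions for the example, not the formulation of the paper.

```python
# Flexible job shop with DAG precedences: assignment variables y, start
# times s, pairwise sequencing variables z, and makespan minimization.
import itertools
import pulp

# toy instance: operation -> {eligible machine: processing time}
proc = {"o1": {"m1": 3, "m2": 4}, "o2": {"m1": 2},
        "o3": {"m2": 5}, "o4": {"m1": 4, "m2": 3}}
# DAG precedences (pred, succ): o1 before o2 and o3; both before o4
arcs = [("o1", "o2"), ("o1", "o3"), ("o2", "o4"), ("o3", "o4")]
ops = list(proc)
machines = sorted({m for d in proc.values() for m in d})
M = sum(max(d.values()) for d in proc.values())   # valid makespan upper bound

prob = pulp.LpProblem("fjs_dag", pulp.LpMinimize)
s = {o: pulp.LpVariable(f"s_{o}", lowBound=0) for o in ops}            # start times
y = {(o, m): pulp.LpVariable(f"y_{o}_{m}", cat="Binary")
     for o in ops for m in proc[o]}                                    # assignment
z = {(a, b, m): pulp.LpVariable(f"z_{a}_{b}_{m}", cat="Binary")
     for a, b in itertools.combinations(ops, 2)
     for m in machines if m in proc[a] and m in proc[b]}               # sequencing
cmax = pulp.LpVariable("makespan", lowBound=0)
prob += cmax

for o in ops:
    prob += pulp.lpSum(y[o, m] for m in proc[o]) == 1                  # one machine
    prob += cmax >= s[o] + pulp.lpSum(proc[o][m] * y[o, m] for m in proc[o])
for a, b in arcs:                                                      # DAG precedences
    prob += s[b] >= s[a] + pulp.lpSum(proc[a][m] * y[a, m] for m in proc[a])
for (a, b, m), zv in z.items():                                        # no machine overlap
    prob += s[b] >= s[a] + proc[a][m] - M * (3 - zv - y[a, m] - y[b, m])
    prob += s[a] >= s[b] + proc[b][m] - M * (2 + zv - y[a, m] - y[b, m])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(cmax), {o: pulp.value(s[o]) for o in ops})
```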

    Metaheuristics for the Online Printing Shop Scheduling Problem - Supplementary Material

    This document presents further numerical results of the experiments concerning the classical instances of the flexible job shop scheduling problem, performed in (Lunardi et al., Metaheuristics for the Online Printing Shop Scheduling Problem, submitted). Additionally, this document gathers the best makespan values (upper and lower bounds) found by state-of-the-art algorithms.

    Spectral Projected Gradient Methods: Review and Perspectives

    Over the last two decades, it has been observed that using the gradient vector as a search direction in large-scale optimization may lead to efficient algorithms. The effectiveness relies on choosing the step lengths according to novel ideas that are related to the spectrum of the underlying local Hessian rather than to the standard decrease in the objective function. A review of these so-called spectral projected gradient methods for convex constrained optimization is presented. To illustrate the performance of these low-cost schemes, an optimization problem on the set of positive definite matrices is described.
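
    A minimal sketch of a spectral projected gradient (SPG) iteration on a box constraint, combining the Barzilai-Borwein spectral step length with projection and a simple nonmonotone Armijo backtracking. The box, test problem, and tolerances are illustrative assumptions.

```python
# SPG on a box: d_k = P(x_k - lambda_k g_k) - x_k, with lambda_k chosen
# spectrally and a nonmonotone line search on the last few function values.
import numpy as np

def spg_box(f, grad_f, x0, lo, hi, tol=1e-6, max_iter=1000, memory=10):
    proj = lambda z: np.clip(z, lo, hi)          # projection onto the box
    x = proj(np.asarray(x0, dtype=float))
    g = grad_f(x)
    lam = 1.0                                    # spectral step length
    f_hist = [f(x)]
    for _ in range(max_iter):
        d = proj(x - lam * g) - x                # spectral projected direction
        if np.linalg.norm(d, np.inf) <= tol:
            break
        f_ref = max(f_hist[-memory:])            # nonmonotone reference value
        alpha = 1.0
        while f(x + alpha * d) > f_ref + 1e-4 * alpha * np.dot(g, d):
            alpha *= 0.5                         # backtracking
        x_new = x + alpha * d
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy = np.dot(s, y)
        lam = np.dot(s, s) / sy if sy > 1e-16 else 1e4   # BB spectral step
        lam = min(max(lam, 1e-10), 1e10)
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x

# toy usage: minimize ||x - c||^2 over the box [0, 1]^3
c = np.array([2.0, -1.0, 0.3])
x_star = spg_box(lambda x: np.sum((x - c) ** 2), lambda x: 2 * (x - c),
                 np.zeros(3), 0.0, 1.0)
```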

    Improving ultimate convergence of an Augmented Lagrangian method

    Optimization methods that employ the classical Powell-Hestenes-Rockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation decreased over the last ten years due to the comparative success of Interior-Point Newtonian algorithms, which are asymptotically faster. In the present research, a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its “pure” counterparts on critical problems. Moreover, an additional hybrid algorithm is defined, in which the Interior-Point method is replaced by the Newtonian resolution of a KKT system identified by the Augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page.
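
    A minimal sketch of a Powell-Hestenes-Rockafellar augmented Lagrangian outer loop for equality constraints h(x) = 0, with the inner subproblem handed to an off-the-shelf unconstrained routine (SciPy here, as an assumption). The penalty update rule and tolerances are illustrative; this is not the Tango Project implementation.

```python
# Outer loop: minimize the augmented Lagrangian in x, update the multipliers
# with the first-order rule, and increase the penalty when feasibility stalls.
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, rho=10.0, tol=1e-8, max_outer=30):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(np.atleast_1d(h(x)))
    prev_infeas = np.inf
    for _ in range(max_outer):
        def L(z, lam=lam, rho=rho):              # PHR augmented Lagrangian
            hz = np.atleast_1d(h(z))
            return f(z) + lam @ hz + 0.5 * rho * np.sum(hz ** 2)
        x = minimize(L, x, method="BFGS").x      # inner (unconstrained) solve
        hx = np.atleast_1d(h(x))
        infeas = np.linalg.norm(hx, np.inf)
        if infeas <= tol:
            break
        lam = lam + rho * hx                     # first-order multiplier update
        if infeas > 0.5 * prev_infeas:           # insufficient feasibility progress:
            rho *= 10.0                          # increase the penalty parameter
        prev_infeas = infeas
    return x, lam

# toy usage: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0
x_opt, lam_opt = augmented_lagrangian(
    lambda x: x[0] ** 2 + x[1] ** 2,
    lambda x: np.array([x[0] + x[1] - 1.0]),
    np.array([0.0, 0.0]))
```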